119 research outputs found
Evaluation of CNN-based Single-Image Depth Estimation Methods
While interest in deep models for single-image depth estimation is growing,
established schemes for their evaluation remain limited. We propose a set of
novel quality criteria that allow a more
detailed analysis by focusing on specific characteristics of depth maps. In
particular, we address the preservation of edges and planar regions, depth
consistency, and absolute distance accuracy. In order to employ these metrics
to evaluate and compare state-of-the-art single-image depth estimation
approaches, we provide a new high-quality RGB-D dataset. We used a DSLR camera
together with a laser scanner to acquire high-resolution images and highly
accurate depth maps. Experimental results show the validity of our proposed
evaluation protocol.
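As a rough, hedged illustration of what such criteria look like in practice, the sketch below pairs the standard absolute relative error with a toy edge-preservation F1 score; the function names and the gradient threshold `thr` are illustrative choices, not the paper's protocol.

```python
import numpy as np

def abs_rel(pred, gt):
    """Absolute relative distance error over valid ground-truth pixels,
    a standard global accuracy criterion for depth maps."""
    m = gt > 0  # laser scans leave holes; evaluate only measured pixels
    return float(np.mean(np.abs(pred[m] - gt[m]) / gt[m]))

def edge_f1(pred, gt, thr=0.5):
    """Toy edge-preservation score: F1 overlap between depth-gradient
    edges of the prediction and of the ground truth."""
    gp = np.hypot(*np.gradient(pred)) > thr
    gg = np.hypot(*np.gradient(gt)) > thr
    tp = np.logical_and(gp, gg).sum()
    prec = tp / max(gp.sum(), 1)
    rec = tp / max(gg.sum(), 1)
    return 2 * prec * rec / max(prec + rec, 1e-9)
```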
Grid Loss: Detecting Occluded Faces
Detection of partially occluded objects is a challenging computer vision
problem. Standard Convolutional Neural Network (CNN) detectors fail if parts of
the detection window are occluded, since not every sub-part of the window is
discriminative on its own. To address this issue, we propose a novel loss layer
for CNNs, named grid loss, which minimizes the error rate on sub-blocks of a
convolution layer independently rather than over the whole feature map. This
results in parts being more discriminative on their own, enabling the detector
to recover if the detection window is partially occluded. By mapping our loss
layer back to a regular fully connected layer, no additional computational cost
is incurred at runtime compared to standard CNNs. We demonstrate our method for
face detection on several public face detection benchmarks and show that our
method outperforms regular CNNs, is suitable for realtime applications and
achieves state-of-the-art performance.
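A minimal sketch of the idea, assuming a hinge-loss detector whose linear template `weight` is scored against the final feature map; the grid size `block` and weighting `lam` are illustrative hyper-parameters, not the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

def grid_loss(feat, weight, bias, labels, block=2, lam=0.5):
    """Hinge loss on the whole detection window plus independent hinge
    losses on each spatial sub-block, so every part must become
    discriminative on its own. feat: (N, C, H, W); weight: (C, H, W);
    labels: (N,) float tensor in {-1, +1}."""
    n, c, h, w = feat.shape
    s_full = (feat * weight).sum(dim=(1, 2, 3)) + bias
    loss = F.relu(1.0 - labels * s_full).mean()  # holistic term
    bh, bw = h // block, w // block
    for i in range(block):
        for j in range(block):
            fs = feat[:, :, i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            ws = weight[:, i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            s = (fs * ws).sum(dim=(1, 2, 3)) + bias / block**2
            loss = loss + lam * F.relu(1.0 - labels * s).mean()
    return loss
```

Since the per-block templates are just slices of the full template, they fold back into one linear layer at test time, which is how the abstract's no-extra-runtime-cost claim is realised.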
Ten Years of Pedestrian Detection, What Have We Learned?
Paper-by-paper results make it easy to miss the forest for the trees. We
analyse the remarkable progress of the last decade by discussing the main ideas
explored in the 40+ detectors currently present in the Caltech pedestrian
detection benchmark. We observe that there exist three families of approaches,
all currently reaching similar detection quality. Based on our analysis, we
study the complementarity of the most promising ideas by combining multiple
published strategies. This new decision forest detector achieves the current
best known performance on the challenging Caltech-USA dataset. (To appear in the ECCV 2014 CVRSUAD workshop proceedings.)
Self-Supervised Relative Depth Learning for Urban Scene Understanding
As an agent moves through the world, the apparent motion of scene elements is
(usually) inversely proportional to their depth. It is natural for a learning
agent to associate image patterns with the magnitude of their displacement over
time: as the agent moves, faraway mountains don't move much; nearby trees move
a lot. This natural relationship between the appearance of objects and their
motion is a rich source of information about the world. In this work, we start
by training a deep network, using fully automatic supervision, to predict
relative scene depth from single images. The relative depth training images are
automatically derived from simple videos of cars moving through a scene, using
recent motion segmentation techniques, and no human-provided labels. This proxy
task of predicting relative depth from a single image induces features in the
network that result in large improvements in a set of downstream tasks
including semantic segmentation, joint road segmentation and car detection, and
monocular (absolute) depth estimation, over a network trained from scratch. The
improvement on the semantic segmentation task is greater than that produced by
any other automatically supervised method. Moreover, for monocular depth
estimation, our unsupervised pre-training method even outperforms supervised
pre-training with ImageNet. In addition, we demonstrate benefits from learning
to predict (unsupervised) relative depth in the specific videos associated with
various downstream tasks. We adapt to the specific scenes in those tasks in an
unsupervised manner to improve performance. In summary, for semantic
segmentation, we present state-of-the-art results among methods that do not use
supervised pre-training, and we even exceed the performance of supervised
ImageNet pre-trained models for monocular depth estimation, achieving results
that are comparable with state-of-the-art methods.
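A hedged sketch of the supervision signal, using dense optical flow as a simplified stand-in for the paper's motion-segmentation pipeline: inverse flow magnitude orders pixels by relative depth. `eps` and the Farneback parameters are illustrative.

```python
import cv2
import numpy as np

def relative_depth_from_motion(frame0, frame1, eps=1e-3):
    """Derive relative-depth pseudo-labels from apparent motion:
    nearby pixels move a lot, faraway pixels barely move, so the
    inverse flow magnitude orders pixels by depth."""
    g0 = cv2.cvtColor(frame0, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    rel = 1.0 / (mag + eps)  # larger = farther
    # Normalise to [0, 1]: only the ordering is meaningful as a label.
    return (rel - rel.min()) / (rel.max() - rel.min() + eps)
```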
Real-time Embedded Person Detection and Tracking for Shopping Behaviour Analysis
Shopping behaviour analysis through counting and tracking of people in
shop-like environments offers valuable information for store operators and
provides key insights into the store's layout (e.g. frequently visited spots).
Instead of using extra staff for this, automated on-premise solutions are
preferred. These automated systems should be cost-effective, preferably run on
lightweight embedded hardware, cope with very challenging situations (e.g.
heavy occlusion), and ideally operate in real time. We solve this challenge by
implementing a real-time TensorRT optimized YOLOv3-based pedestrian detector,
on a Jetson TX2 hardware platform. By combining the detector with a sparse
optical flow tracker we assign a unique ID to each customer and tackle the
problem of losing partially occluded customers. Our detector-tracker based
solution achieves an average precision of 81.59% at a processing speed of 10
FPS. Besides valuable statistics, heat maps of frequently visited spots are
extracted and used as an overlay on the video stream.
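A simplified sketch of the detect-then-track pattern described here, using OpenCV's pyramidal Lucas-Kanade flow to carry identified boxes between detector runs; the box format and all parameters are illustrative, and the paper's TensorRT/YOLOv3 detector is assumed to refresh and re-associate boxes every few frames.

```python
import cv2
import numpy as np

def propagate_boxes(prev_gray, gray, tracks):
    """Move each identified customer box (id -> (x, y, w, h)) from the
    previous grayscale frame to the current one with sparse
    Lucas-Kanade optical flow."""
    moved = {}
    for tid, (x, y, w, h) in tracks.items():
        pts = cv2.goodFeaturesToTrack(prev_gray[y:y+h, x:x+w],
                                      maxCorners=20, qualityLevel=0.01,
                                      minDistance=3)
        if pts is None:
            continue
        pts = pts.reshape(-1, 2) + np.float32([x, y])  # to image coords
        nxt, st, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, pts.reshape(-1, 1, 2), None)
        ok = st.reshape(-1) == 1
        if ok.sum() < 3:
            continue  # too few surviving points; wait for the detector
        dx, dy = np.median(nxt.reshape(-1, 2)[ok] - pts[ok], axis=0)
        moved[tid] = (int(x + dx), int(y + dy), w, h)
    return moved
```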
Merging pose estimates across space and time
Numerous ‘non-maximum suppression’ (NMS) post-processing schemes have been proposed for merging multiple independent object detections. We propose a generalization of NMS beyond bounding boxes to merge multiple pose estimates in a single frame. The final estimates are centroids rather than medoids as in standard NMS, thus being more accurate than any of the individual candidates. Using the same mathematical framework, we extend our approach to the multi-frame setting, merging multiple independent pose estimates across space and time and outputting both the number and pose of the objects present in a scene. Our approach sidesteps many of the inherent challenges associated with full tracking (e.g. objects entering/leaving a scene, extended periods of occlusion, etc.). We show its versatility by applying it to two distinct state-of-the-art pose estimation algorithms in three domains: human bodies, faces and mice. Our approach improves both detection accuracy (by helping disambiguate correspondences) and pose estimation quality, and is computationally efficient.
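A toy version of the single-frame case, assuming each candidate is a (K, 2) keypoint array with a score: candidates within a mean keypoint distance `radius` (an illustrative parameter) are clustered greedily and replaced by their score-weighted centroid rather than a single surviving medoid.

```python
import numpy as np

def merge_pose_candidates(poses, scores, radius=20.0):
    """Greedy pose-NMS returning centroids: poses is (N, K, 2),
    scores is (N,). Clusters form around high-scoring candidates;
    each cluster's output is the score-weighted mean pose."""
    order = np.argsort(scores)[::-1]
    used = np.zeros(len(poses), dtype=bool)
    merged = []
    for i in order:
        if used[i]:
            continue
        # Mean per-keypoint distance from candidate i to all candidates.
        d = np.linalg.norm(poses - poses[i], axis=2).mean(axis=1)
        members = (~used) & (d < radius)
        used |= members
        w = scores[members] / scores[members].sum()
        merged.append((w[:, None, None] * poses[members]).sum(axis=0))
    return merged
```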
Improving Multispectral Pedestrian Detection by Addressing Modality Imbalance Problems
Multispectral pedestrian detection is capable of adapting to insufficient
illumination conditions by leveraging color-thermal modalities. However,
in-depth insights into how to fuse the two modalities effectively are still
lacking. Compared with traditional pedestrian detection, we find
multispectral pedestrian detection suffers from modality imbalance problems,
which hinder the optimization of the dual-modality network and degrade
detector performance. Inspired by this observation, we propose the Modality
Balance Network (MBNet) which facilitates the optimization process in a much
more flexible and balanced manner. Firstly, we design a novel Differential
Modality Aware Fusion (DMAF) module to make the two modalities complement each
other. Secondly, an illumination aware feature alignment module selects
complementary features according to the illumination conditions and aligns the
two modality features adaptively. Extensive experimental results demonstrate
that MBNet outperforms the state of the art on the challenging KAIST and
CVC-14 multispectral pedestrian datasets in terms of both accuracy and
computational efficiency. Code is available at
https://github.com/CalayZhou/MBNet
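As a loose sketch of the differential-fusion idea (not the released DMAF code, which lives at the repository above): the difference of the two modality features is squeezed into a channel gate that lets each stream borrow complementary features from the other.

```python
import torch
import torch.nn as nn

class DifferentialFusion(nn.Module):
    """Illustrative differential fusion gate for two modality streams."""
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gate = nn.Sequential(nn.Linear(channels, channels), nn.Tanh())

    def forward(self, rgb, thermal):
        diff = rgb - thermal                       # differential features
        w = self.gate(self.pool(diff).flatten(1))  # per-channel weights
        w = w[:, :, None, None]
        # Each stream is complemented by gated features of the other;
        # opposite signs keep the exchange symmetric.
        return rgb + thermal * w, thermal - rgb * w
```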
A study of observation scales based on Felzenszwalb-Huttenlocher dissimilarity measure for hierarchical segmentation
Hierarchical image segmentation provides a region-oriented scale-space, i.e., a set of image segmentations at different detail levels in which the segmentations at finer levels are nested with respect to those at coarser levels. Guimarães et al. proposed a hierarchical graph based image segmentation (HGB) method based on the Felzenszwalb-Huttenlocher dissimilarity. This HGB method computes, for each edge of a graph, the minimum scale in a hierarchy at which two regions linked by this edge should merge according to the dissimilarity. In order to generalize this method, we first propose an algorithm to compute the intervals which contain all the observation scales at which the associated regions should merge. Then, following the current trend in mathematical morphology to study criteria which are not increasing on a hierarchy, we present various strategies to select a significant observation scale in these intervals. We use the BSDS dataset to assess our observation scale selection methods. The experiments show that some of these strategies lead to better segmentation results than the ones obtained with the original HGB method.
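For concreteness, a small helper showing the per-edge quantity the HGB method builds on: under the Felzenszwalb-Huttenlocher criterion w <= min(Int(C1) + k/|C1|, Int(C2) + k/|C2|), the minimum observation scale at which an edge of weight w can merge its two regions has the closed form below (variable names are mine, not the paper's).

```python
def min_merge_scale(w, int1, size1, int2, size2):
    """Smallest scale k satisfying the Felzenszwalb-Huttenlocher merge
    criterion w <= min(int1 + k/size1, int2 + k/size2) for regions with
    internal differences int1, int2 and sizes size1, size2."""
    return max(0.0, (w - int1) * size1, (w - int2) * size2)
```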
Automatic human action recognition in videos by graph embedding
The problem of human action recognition has received increasing attention in recent years for its importance in many applications. Yet, the main limitation of current approaches is that they do not capture well the spatial relationships in the subject performing the action. This paper presents an initial study which uses graphs to represent the actor's shape and graph embedding to then convert the graph into a suitable feature vector. In this way, we can benefit from the wide range of statistical classifiers while retaining the strong representational power of graphs. The paper shows that, although the proposed method does not yet achieve accuracy comparable to that of the best existing approaches, the embedded graphs are capable of describing the deformable human shape and its evolution over time. This confirms the soundness of the approach's rationale and its potential for future improvement.
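One common way to realise the graph-to-vector step the paper describes is a spectral embedding; the sketch below uses the top eigenvalues of a (symmetric) adjacency matrix as a fixed-length descriptor, which may well differ from the embedding the authors actually use.

```python
import numpy as np

def spectral_embedding(adj, dim=8):
    """Embed a shape graph as a fixed-length vector of its largest
    adjacency eigenvalues, ready for any statistical classifier.
    adj must be a symmetric (n, n) adjacency matrix."""
    eigvals = np.linalg.eigvalsh(adj)   # ascending order
    feats = eigvals[::-1][:dim]         # keep the top-dim spectrum
    return np.pad(feats, (0, max(0, dim - feats.size)))
```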
Online, Real-Time Tracking Using a Category-to-Individual Detector
A method for online, real-time tracking of objects is presented. Tracking is treated as a repeated detection problem where potential target objects are identified with a pre-trained category detector and object identity across frames is established by individual-specific detectors. The individual detectors are (re-)trained online from a single
positive example whenever there is a coincident category detection. This ensures that the tracker is robust to drift. Real-time operation is possible since an individual-object detector is obtained through elementary manipulations of the thresholds of the category detector and therefore only minimal additional computations are required. Our tracking algorithm is benchmarked against nine state-of-the-art trackers on two large, publicly available and challenging video datasets. We find that our algorithm is 10% more accurate and nearly as fast as the fastest of the competing algorithms, and it is as accurate but 20 times faster than the most accurate of the competing algorithms.
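A hedged sketch of the threshold manipulation the abstract alludes to, assuming a cascade-style category detector with per-stage scores: the individual detector simply tightens each stage threshold to just below the response of the one confirmed detection (the `margin` and both helper names are illustrative).

```python
import numpy as np

def individualize(stage_scores, margin=0.1):
    """Derive per-stage thresholds for an individual-specific detector
    from the category detector's stage responses on a single confirmed
    detection of the target."""
    return np.asarray(stage_scores, dtype=float) - margin

def is_same_individual(stage_scores, thresholds):
    """The individual is re-detected only if every stage response
    clears its tightened threshold."""
    return bool(np.all(np.asarray(stage_scores) >= thresholds))
```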